
    The impact of beam deconvolution on noise properties in CMB measurements: Application to Planck LFI

    We present an analysis of the effects of beam deconvolution on noise properties in CMB measurements. The analysis is built around the artDeco beam deconvolver code. We derive a low-resolution noise covariance matrix that describes the residual noise in deconvolution products, both in harmonic and pixel space. The matrix models the residual correlated noise that remains in time-ordered data after destriping, and the effect of deconvolution on it. To validate the results, we generate noise simulations that mimic the data from the Planck LFI instrument. A χ² test for the full 70 GHz covariance in the multipole range ℓ = 0-50 yields a mean reduced χ² of 1.0037. We compare two destriping options, full and independent destriping, when deconvolving subsets of the available data. Full destriping leaves substantially less residual noise, but leaves the data sets intercorrelated. We also derive a white noise covariance matrix that provides an approximation of the full noise at high multipoles, and study the properties of high-resolution noise in pixel space through simulations. Comment: 22 pages, 25 figures
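
    The quoted reduced χ² is the outcome of a consistency test between simulated residual noise and the derived covariance matrix. Below is a minimal numpy sketch of such a test under simplified assumptions; the covariance and the noise realizations are random stand-ins, not actual Planck LFI deconvolution products.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a low-resolution residual-noise covariance matrix
# (in reality this would be the deconvolved-noise NCVM in pixel space).
n_pix = 300
A = rng.standard_normal((n_pix, n_pix))
ncvm = A @ A.T / n_pix + np.eye(n_pix)      # symmetric positive definite

# Draw noise simulations consistent with the covariance and check that
# the mean reduced chi^2 of n^T N^{-1} n is close to 1.
L = np.linalg.cholesky(ncvm)
ncvm_inv = np.linalg.inv(ncvm)

n_sims = 1000
chi2 = np.empty(n_sims)
for i in range(n_sims):
    n = L @ rng.standard_normal(n_pix)      # simulated residual-noise map
    chi2[i] = n @ ncvm_inv @ n / n_pix      # reduced chi^2, dof = n_pix

print(f"mean reduced chi^2 = {chi2.mean():.4f}")   # expected ~1.0
```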

    Application of beam deconvolution technique to power spectrum estimation for CMB measurements

    We present two novel methods for the estimation of the angular power spectrum of cosmic microwave background (CMB) anisotropies. We assume an absolute CMB experiment with arbitrary asymmetric beams and arbitrary sky coverage. The methods differ from earlier ones in that the power spectrum is estimated directly from the time-ordered data, without first compressing the data into a sky map, and they take into account the effect of asymmetric beams. In particular, they correct the beam-induced leakage from temperature to polarization. The methods are applicable to a case where part of the sky has been masked out to remove foreground contamination, leaving a pure CMB signal but incomplete sky coverage. The first method (deconvolution quadratic maximum likelihood) is derived as the optimal quadratic estimator, which simultaneously yields an unbiased spectrum estimate and minimizes its variance. We successfully apply it to multipoles up to ℓ = 200. The second method is derived as a weak-signal approximation from the first one. It yields an unbiased estimate for the full multipole range, but relaxes the requirement of minimal variance. We validate the methods with simulations for the 70 GHz channel of the Planck surveyor, and demonstrate that we are able to correct the beam effects in the TT, EE, BB and TE spectra up to multipole ℓ = 1500. Together, the two methods cover the complete multipole range with no gap in between. Peer reviewed
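
    As a rough illustration of the optimal-quadratic-estimator idea named above (unbiased for any fiducial covariance, minimum variance at the true one), here is a toy pixel-space sketch with random band templates standing in for the beam-convolved covariance derivatives; it is not the deconvolution quadratic maximum likelihood code itself.

```python
import numpy as np

rng = np.random.default_rng(1)
n_d, n_band = 64, 3                      # data size, number of band powers

# Toy "band" covariance templates B_b and a noise covariance N
# (stand-ins for the angular-power-spectrum derivatives dC/dC_ell).
def rand_spd(n):
    M = rng.standard_normal((n, n))
    return M @ M.T / n

B = [rand_spd(n_d) for _ in range(n_band)]
N = 0.5 * np.eye(n_d)
p_true = np.array([2.0, 1.0, 0.5])       # true band powers

C_true = sum(p * Bb for p, Bb in zip(p_true, B)) + N
C_fid = sum(Bb for Bb in B) + N          # deliberately wrong fiducial
Ci = np.linalg.inv(C_fid)

# Quadratic estimator: y_b = 1/2 x^T C^-1 B_b C^-1 x - noise bias,
# Fisher F_bb' = 1/2 tr(C^-1 B_b C^-1 B_b'), and p_hat = F^-1 y is unbiased.
E = [0.5 * Ci @ Bb @ Ci for Bb in B]
bias = np.array([np.trace(Eb @ N) for Eb in E])
F = np.array([[np.trace(Eb @ Bc) for Bc in B] for Eb in E])

L = np.linalg.cholesky(C_true)
p_hat = np.zeros(n_band)
n_sims = 2000
for _ in range(n_sims):
    x = L @ rng.standard_normal(n_d)                 # simulated data vector
    y = np.array([x @ Eb @ x for Eb in E]) - bias
    p_hat += np.linalg.solve(F, y)

print("mean estimate:", p_hat / n_sims, "true:", p_true)
```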

    Planck intermediate results XXIII : Galactic plane emission components derived from Planck with ancillary data

    Planck data, when combined with ancillary data, provide a unique opportunity to separate the diffuse emission components of the inner Galaxy. The purpose of the paper is to elucidate the morphology of the various emission components in the strong star-formation region lying inside the solar radius and to clarify the relationship between the various components. The region of the Galactic plane covered is l = 300° → 0° → 60°, where star formation is highest and the emission is strong enough to allow meaningful component separation. The latitude widths in this longitude range lie between 1° and 2°, which correspond to FWHM z-widths of 100-200 pc at a typical distance of 6 kpc. The four emission components studied here are synchrotron, free-free, anomalous microwave emission (AME), and thermal (vibrational) dust emission. These components are identified by constructing spectral energy distributions (SEDs) at positions along the Galactic plane using the wide frequency coverage of Planck (28.4-857 GHz) in combination with low-frequency radio data at 0.408-2.3 GHz plus WMAP data at 23-94 GHz, along with far-infrared (FIR) data from COBE-DIRBE and IRAS. The free-free component is determined from radio recombination line (RRL) data. AME is found to be comparable in brightness to the free-free emission on the Galactic plane in the frequency range 20-40 GHz, with a width in latitude similar to that of the thermal dust; it comprises 45 ± 1% of the total 28.4 GHz emission in the longitude range l = 300° → 0° → 60°. The free-free component is the narrowest, reflecting the fact that it is produced by current star formation as traced by the narrow distribution of OB stars. It is the dominant emission on the plane between 60 and 100 GHz. RRLs from this ionized gas are used to assess its distance, leading to a free-free z-width of FWHM ≈ 100 pc. The narrow synchrotron component has a low-frequency brightness spectral index β_synch ≈ -2.7, similar to that of the broad synchrotron component, indicating that both are populated by cosmic-ray electrons of the same spectral index. The width of this narrow synchrotron component is significantly larger than that of the other three components, suggesting that it is generated in an assembly of older supernova remnants that have expanded to sizes of order 150 pc in 3 × 10⁵ yr; pulsars of a similar age have a similar spread in latitude. The thermal dust is identified in the SEDs with average parameters T_dust = 20.4 ± 0.4 K, β_FIR = 1.94 ± 0.03 (>353 GHz), and β_mm = 1.67 ± 0.02. Peer reviewed
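
    The component separation rests on fitting fixed-shape spectra to the SED at each position along the plane. The sketch below fits amplitudes of four simplified component shapes by linear least squares; the spectral indices, AME peak, frequencies, and amplitudes are illustrative assumptions, not the values derived in the paper.

```python
import numpy as np

h, k_B = 6.626e-34, 1.381e-23   # SI constants

# Simplified, fixed-shape component spectra (flux-density-like units),
# each normalised at a reference frequency.
def synchrotron(nu_ghz, alpha=-0.7):
    return (nu_ghz / 30.0) ** alpha

def free_free(nu_ghz, alpha=-0.1):
    return (nu_ghz / 30.0) ** alpha

def ame(nu_ghz, nu_peak=25.0, width=0.5):
    # Log-normal bump as a crude AME shape
    return np.exp(-0.5 * (np.log(nu_ghz / nu_peak) / width) ** 2)

def thermal_dust(nu_ghz, beta=1.6, T=20.0):
    # Modified blackbody, normalised at 353 GHz
    x = h * nu_ghz * 1e9 / (k_B * T)
    x_ref = h * 353e9 / (k_B * T)
    return (nu_ghz / 353.0) ** (beta + 3) / np.expm1(x) * np.expm1(x_ref)

nu = np.array([28.4, 44.1, 70.4, 100, 143, 217, 353, 545, 857])  # GHz

# Design matrix of component shapes; fit amplitudes by linear least squares.
A = np.column_stack([synchrotron(nu), free_free(nu), ame(nu), thermal_dust(nu)])
true_amp = np.array([1.0, 2.0, 1.5, 50.0])                       # toy SED
sed = A @ true_amp * (1 + 0.02 * np.random.default_rng(2).standard_normal(nu.size))

amp_fit, *_ = np.linalg.lstsq(A, sed, rcond=None)
print("fitted amplitudes:", amp_fit)
```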

    Beam-deconvolved Planck LFI maps

    12 pages, 7 figures. The Planck Collaboration made its final data release in 2018. In this paper we describe beam-deconvolution map products made from Planck LFI data using the artDeco deconvolution code to symmetrize the effective beam. The deconvolution results are auxiliary data products, available through the Planck Legacy Archive. Analysis of the deconvolved survey difference maps reveals signs of residual signal in the 30 GHz and 44 GHz frequency channels. We produce low-resolution maps and corresponding noise covariance matrices (NCVMs). The NCVMs agree reasonably well with the half-ring noise estimates except at 44 GHz, where we observe an asymmetry between the EE and BB noise spectra, possibly a sign of further unresolved systematics. Peer reviewed
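
    The half-ring comparison relies on the fact that the half-difference of two maps sharing the same sky signal but carrying independent noise isolates the noise. A minimal numpy sketch of that idea, with toy numbers rather than LFI maps:

```python
import numpy as np

rng = np.random.default_rng(3)
n_pix = 1000

# Toy sky signal plus independent noise in two "half-ring" halves.
# Real half-ring maps would come from splitting the Planck ring sets;
# everything here is a stand-in.
signal = rng.standard_normal(n_pix)
sigma_noise = 0.3
m1 = signal + sigma_noise * rng.standard_normal(n_pix)
m2 = signal + sigma_noise * rng.standard_normal(n_pix)

# The half-ring half-difference cancels the signal and leaves noise with
# the same variance as the noise in the full (mean) map.
half_diff = 0.5 * (m1 - m2)
noise_estimate = half_diff.var()

# Compare with the noise variance expected for the mean map, i.e. what one
# would read off the diagonal of a noise covariance matrix (NCVM).
expected = sigma_noise**2 / 2
print(f"half-ring estimate: {noise_estimate:.4f}, expected: {expected:.4f}")
```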

    Euclid: Fast two-point correlation function covariance through linear construction

    We present a method for fast evaluation of the covariance matrix for a two-point galaxy correlation function (2PCF) measured with the Landy-Szalay estimator. The standard way of evaluating the covariance matrix consists of running the estimator on a large number of mock catalogs and evaluating their sample covariance. With large random catalog sizes (random-to-data object ratio M ≫ 1), the computational cost of the standard method is dominated by that of counting the data-random and random-random pairs, while the uncertainty of the estimate is dominated by that of the data-data pairs. We present a method called Linear Construction (LC), where the covariance is estimated for small random catalogs with sizes M = 1 and M = 2, and the covariance for arbitrary M is constructed as a linear combination of the two. We show that the LC covariance estimate is unbiased. We validated the method with PINOCCHIO simulations in the range r = 20-200 h⁻¹ Mpc. With M = 50 and 2 h⁻¹ Mpc bins, the theoretical speedup of the method is a factor of 14. We discuss the impact on the precision matrix and parameter estimation, and present a formula for the covariance of the covariance. Peer reviewed
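
    A sketch of the linear-construction idea is given below. It assumes, purely for illustration, that the random-catalog contribution to the covariance scales as 1/M; the paper derives the exact combination coefficients, which may differ from this toy model.

```python
import numpy as np

def lc_covariance(cov_m1, cov_m2, M):
    """Linear-construction sketch: combine covariances measured with random
    catalogs of size M=1 and M=2 into an estimate for larger M.

    Illustrative assumption: the M-dependence is C(M) = C_inf + C_rand / M,
    which with the two measured matrices gives
        C_rand = 2 * (C(1) - C(2)),   C_inf = 2 * C(2) - C(1).
    The actual coefficients derived in the paper may differ.
    """
    cov_rand = 2.0 * (cov_m1 - cov_m2)
    cov_inf = 2.0 * cov_m2 - cov_m1
    return cov_inf + cov_rand / M

# Toy usage with random symmetric stand-ins for the measured covariances.
rng = np.random.default_rng(4)
base = rng.standard_normal((20, 20))
cov_inf_true = base @ base.T          # "infinite-random-catalog" part
noise = rng.standard_normal((20, 20))
cov_rand_true = noise @ noise.T       # random-catalog noise part

cov_m1 = cov_inf_true + cov_rand_true / 1.0
cov_m2 = cov_inf_true + cov_rand_true / 2.0
cov_m50 = lc_covariance(cov_m1, cov_m2, M=50)
print(np.allclose(cov_m50, cov_inf_true + cov_rand_true / 50.0))  # True
```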

    Exploring cosmic origins with CORE : Survey requirements and mission design

    Future observations of cosmic microwave background (CMB) polarisation have the potential to answer some of the most fundamental questions of modern physics and cosmology, including: what physical process gave birth to the Universe we see today? What are the dark matter and dark energy that seem to constitute 95% of the energy density of the Universe? Do we need extensions to the standard model of particle physics and fundamental interactions? Is the ΛCDM cosmological scenario correct, or are we missing an essential piece of the puzzle? In this paper, we list the requirements for a future CMB polarisation survey addressing these scientific objectives, and discuss the design drivers of the CORE space mission proposed to ESA in answer to the "M5" call for a medium-sized mission. The rationale and options, and the methodologies used to assess the mission's performance, are of interest to other future CMB mission design studies. CORE has 19 frequency channels, distributed over a broad frequency range spanning the 60-600 GHz interval, to control astrophysical foreground emission. The angular resolution ranges from 2' to 18', and the aggregate CMB sensitivity is about 2 μK·arcmin. The observations are made with a single integrated focal-plane instrument, consisting of an array of 2100 cryogenically cooled, linearly polarised detectors at the focus of a 1.2-m aperture cross-Dragone telescope. The mission is designed to minimise all sources of systematic effects, which must be controlled so that no more than 10⁻⁴ of the intensity leaks into polarisation maps, and no more than about 1% of E-type polarisation leaks into B-type modes. CORE observes the sky from a large Lissajous orbit around the Sun-Earth L2 point, an orbit that offers stable observing conditions and avoids contamination from sidelobe pick-up of stray radiation originating from the Sun, Earth, and Moon. The entire sky is observed repeatedly during four years of continuous scanning, with a combination of three rotations of the spacecraft over different timescales. With about 50% of the sky covered every few days, this scan strategy provides the mitigation of systematic effects and the internal redundancy that are needed to convincingly extract the primordial B-mode signal on large angular scales, and to check with adequate sensitivity the consistency of the observations in several independent data subsets. CORE is designed as a "near-ultimate" CMB polarisation mission which, for optimal complementarity with ground-based observations, will perform the observations that are known to be essential to CMB polarisation science and cannot be obtained by any other means than a dedicated space mission. It will provide well-characterised, highly redundant multi-frequency observations of polarisation at all the scales where foreground emission and cosmic variance dominate the final uncertainty for obtaining precision CMB science, as well as 2' angular resolution maps of high-frequency foreground emission in the 300-600 GHz frequency range, essential for complementarity with future ground-based observations with large telescopes that can observe the CMB with the same beam size. Peer reviewed
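
    The quoted aggregate sensitivity corresponds to an inverse-variance combination of the individual frequency channels. The sketch below shows that combination with placeholder per-channel noise levels; the actual CORE channel sensitivities are tabulated in the paper.

```python
import numpy as np

# Hypothetical per-channel white-noise levels in uK.arcmin for 19 channels;
# these are placeholders, not the tabulated CORE sensitivities.
channel_noise = np.array([12, 10, 9, 8.5, 8, 6.5, 6.5, 6, 6, 6.5,
                          7, 8, 9, 10.5, 13, 20, 33, 52, 104], dtype=float)

# Inverse-variance combination of channels:
#   sigma_comb = ( sum_i 1/sigma_i^2 )^(-1/2)
aggregate = 1.0 / np.sqrt(np.sum(1.0 / channel_noise**2))
print(f"aggregate CMB sensitivity ~ {aggregate:.2f} uK.arcmin")   # ~2
```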

    Euclid preparation : IX. EuclidEmulator2 – power spectrum emulation with massive neutrinos and self-consistent dark energy perturbations

    We present a new, updated version of the EuclidEmulator (called EuclidEmulator2), a fast and accurate predictor for the nonlinear correction of the matter power spectrum. Emulation accurate at the 2 per cent level is now supported in the eight-dimensional parameter space of w₀wₐCDM + Σm_ν models between redshift z = 0 and z = 3 for spatial scales within the supported range. In order to achieve this level of accuracy, we have had to improve the quality of the underlying N-body simulations used as training data: (i) we use self-consistent linear evolution of non-dark-matter species such as massive neutrinos, photons, dark energy, and the metric field; (ii) we perform the simulations in the so-called N-body gauge, which allows one to interpret the results in the framework of general relativity; (iii) we run over 250 high-resolution simulations with 3000³ particles in boxes of 1 (h⁻¹ Gpc)³ volume based on paired-and-fixed initial conditions; and (iv) we provide a resolution correction that can be applied to emulated results as a post-processing step in order to drastically reduce systematic biases on small scales due to residual resolution effects in the simulations. We find that the inclusion of the dynamical dark energy parameter wₐ significantly increases the complexity and expense of creating the emulator. The high fidelity of EuclidEmulator2 is tested in various comparisons against N-body simulations as well as alternative fast predictors such as HALOFIT, HMCode, and CosmicEmu. A blind test is successfully performed against the Euclid Flagship v2.0 simulation. Nonlinear correction factors emulated with EuclidEmulator2 are accurate at the 2 per cent level or better across the supported range of scales and redshifts. Peer reviewed
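
    EuclidEmulator2 emulates a multiplicative nonlinear correction (boost) that is applied to a linear matter power spectrum. The sketch below shows how such a boost would be applied; both the boost and the linear spectrum are toy placeholder functions, not the trained emulator or a Boltzmann-code output, and the resolution correction mentioned above would be applied afterwards in the same multiplicative way.

```python
import numpy as np

def emulator_boost(k, z):
    """Hypothetical stand-in for the emulated nonlinear boost B(k, z);
    the real emulator returns the trained correction, this is a smooth toy."""
    return 1.0 + (k / 0.1) ** 1.5 / (1.0 + (k / 0.1) ** 1.5) * 20.0 * np.exp(-z / 2.0)

def linear_power(k, z):
    """Toy linear matter power spectrum (placeholder for a Boltzmann code)."""
    return 2.0e4 * (k / 0.05) / (1.0 + (k / 0.02) ** 2.5) * np.exp(-z / 3.0)

# The emulator supplies a multiplicative nonlinear correction:
#   P_nl(k, z) = B(k, z) * P_lin(k, z)
k = np.logspace(-2, 1, 50)          # wavenumbers in h/Mpc
z = 0.5
p_nl = emulator_boost(k, z) * linear_power(k, z)
print(p_nl[:3])
```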

    Euclid preparation : XII. Optimizing the photometric sample of the Euclid survey for galaxy clustering and galaxy-galaxy lensing analyses

    Photometric redshifts (photo-zs) are one of the main ingredients in the analysis of cosmological probes. Their accuracy particularly affects the results of analyses of galaxy clustering with photometrically selected galaxies (GCph) and weak lensing. In the next decade, space missions such as Euclid will collect precise and accurate photometric measurements for millions of galaxies. These data should be complemented with upcoming ground-based observations to derive precise and accurate photo-zs. In this article we explore how the tomographic redshift binning and the depth of ground-based observations will affect the cosmological constraints expected from the Euclid mission. We focus on GCph and extend the study to include galaxy-galaxy lensing (GGL). We add a layer of complexity to the analysis by simulating several realistic photo-z distributions based on the Euclid Consortium Flagship simulation and using a machine-learning photo-z algorithm. We then use the Fisher matrix formalism together with these galaxy samples to study the cosmological constraining power as a function of redshift binning, survey depth, and photo-z accuracy. We find that bins with an equal width in redshift provide a higher figure of merit (FoM) than equipopulated bins, and that increasing the number of redshift bins from ten to 13 improves the FoM by 35% and 15% for GCph and its combination with GGL, respectively. For GCph, an increase in the survey depth provides a higher FoM. However, when we include faint galaxies beyond the limit of the spectroscopic training data, the resulting FoM decreases because of the spurious photo-zs. When combining GCph and GGL, the number density of the sample, which is set by the survey depth, is the main factor driving the variations in the FoM. Adding galaxies at faint magnitudes and high redshift increases the FoM even when they are beyond the spectroscopic limit, since the increase in number density compensates for the photo-z degradation in this case. We conclude that there is more information that can be extracted beyond the nominal ten tomographic redshift bins of Euclid, and that we should be cautious when adding faint galaxies to our sample, since they can degrade the cosmological constraints. Peer reviewed
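
    Two ingredients of the analysis lend themselves to a short sketch: constructing equal-width versus equipopulated tomographic bins from a redshift sample, and computing a dark-energy figure of merit from a Fisher matrix by marginalising over the remaining parameters. The redshift sample and the Fisher matrix below are hypothetical stand-ins, not Flagship-based samples or the paper's Fisher forecasts.

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy photometric-redshift sample (stand-in for a Flagship-based sample).
z = rng.gamma(shape=2.0, scale=0.5, size=100_000)
n_bins = 13
z_min, z_max = 0.0, 2.5

# Equal-width bins in redshift vs. equipopulated bins (equal galaxy counts).
edges_equal_width = np.linspace(z_min, z_max, n_bins + 1)
edges_equipop = np.quantile(z[(z > z_min) & (z < z_max)],
                            np.linspace(0, 1, n_bins + 1))

def figure_of_merit(fisher, i_w0, i_wa):
    """DETF-style FoM: invert the Fisher matrix to marginalise over all other
    parameters, then FoM = 1 / sqrt(det of the (w0, wa) covariance block)."""
    cov = np.linalg.inv(fisher)
    block = cov[np.ix_([i_w0, i_wa], [i_w0, i_wa])]
    return 1.0 / np.sqrt(np.linalg.det(block))

# Hypothetical 5-parameter Fisher matrix, purely for illustration.
A = rng.standard_normal((5, 5))
fisher = A @ A.T + 5 * np.eye(5)
print("FoM:", figure_of_merit(fisher, i_w0=3, i_wa=4))
print("equal-width edges:", np.round(edges_equal_width, 2))
print("equipopulated edges:", np.round(edges_equipop, 2))
```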
